
    Unsaturated Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects

    In this paper, we provide a throughput analysis of the IEEE 802.11 protocol at the data link layer in non-saturated traffic conditions, taking into account the impact of both transmission channel errors and capture effects in a Rayleigh fading environment. The impact of a non-ideal channel and of capture becomes important for the actual observed throughput in typical network conditions, where traffic is mainly unsaturated, especially in environments with high interference. We extend the multi-dimensional Markovian state transition model characterizing the behavior at the MAC layer by including transmission states that account for packet transmission failures due to errors caused by propagation through the channel, along with a state characterizing the system when there are no packets to be transmitted in the buffer of a station. Finally, we derive a linear model of the throughput along with its interval of validity. Simulation results closely match the theoretical derivations, confirming the effectiveness of the proposed model.
    Comment: To appear in IEEE Transactions on Wireless Communications, 200

    Saturation Throughput Analysis of IEEE 802.11 in Presence of Non Ideal Transmission Channel and Capture Effects

    In this paper, we provide a saturation throughput analysis of the IEEE 802.11 protocol at the data link layer, including the impact of both transmission channel errors and capture effects in a Rayleigh fading environment. The impact of a non-ideal channel and of capture effects, especially in an environment of high interference, becomes important for the actual observed throughput. As far as the 4-way handshaking mechanism is concerned, we extend the multi-dimensional Markovian state transition model characterizing the behavior at the MAC layer by including transmission states that account for packet transmission failures due to errors caused by propagation through the channel. This way, any channel model characterizing the physical transmission medium can be accommodated, including AWGN and fading channels. We also extend the Markov model to consider the behavior of the contention window when employing the basic 2-way handshaking mechanism. Under the usual assumptions regarding the traffic generated per node and the independence of packet collisions, we solve for the stationary probabilities of the Markov chain and develop expressions for the saturation throughput as a function of the number of terminals, packet sizes, raw channel error rates, capture probability, and other key system parameters. The theoretical derivations are then compared to simulation results, confirming the effectiveness of the proposed models.
    Comment: To appear in IEEE Transactions on Communications, 200
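    The core of such an analysis is a fixed-point system coupling the per-slot transmission probability and the conditional failure probability. The following C sketch solves a Bianchi-style fixed point extended with an independent packet error rate pe, so a transmission fails on a collision or a channel error; it is a minimal illustration under assumed parameters (W, m, n), not the paper's full model, which also includes capture states.

    ```c
    #include <math.h>

    /* Bianchi-style fixed point for the DCF backoff model, extended so that a
     * transmission fails either on a collision or on a channel error with an
     * independent packet error rate pe.  W is the minimum contention window,
     * m the maximum backoff stage, n the number of contending stations.
     * Capture and the full per-state bookkeeping are deliberately omitted. */
    static double solve_tau(int n, int W, int m, double pe) {
        double tau = 0.1;                       /* initial guess */
        for (int it = 0; it < 2000; it++) {
            /* conditional failure probability: collision OR channel error */
            double p = 1.0 - pow(1.0 - tau, n - 1) * (1.0 - pe);
            /* per-slot transmission probability from the backoff chain */
            double num = 2.0 * (1.0 - 2.0 * p);
            double den = (1.0 - 2.0 * p) * (W + 1)
                       + p * W * (1.0 - pow(2.0 * p, m));
            tau += 0.5 * (num / den - tau);     /* damped fixed-point update */
        }
        return tau;
    }
    ```

    With pe = 0 and a single station this reduces to the classical result tau = 2/(W+1); the saturation throughput then follows from tau via the usual slot-time accounting.
    
    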

    Design and optimization of a portable LQCD Monte Carlo code using OpenACC

    The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPUs, supporting a wide class of applications but delivering moderate computing performance, to many-core GPUs, exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
    Comment: 26 pages, 2 png figures, preprint of an article submitted for consideration in International Journal of Modern Physics
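    To give a flavor of the directive-based model, here is a minimal OpenACC-style C kernel (a generic axpy loop, not taken from the paper's code): the pragma asks the compiler to offload the loop and manage data movement, while the source remains plain C that compiles and runs unchanged (serially) when the directive is ignored.

    ```c
    /* A minimal OpenACC-style kernel: the directive describes the available
     * parallelism and data movement; the compiler decides how to map the
     * loop onto the target (GPU threads, CPU vector lanes, or plain serial
     * execution if the pragma is not recognized). */
    void axpy(double a, const double *x, double *y, int n) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }
    ```

    This descriptive style is what makes the single-source approach of the paper possible: the same loop body serves every target architecture.
    
    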

    Portable multi-node LQCD Monte Carlo simulations using OpenACC

    This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization across multiple computing nodes, using OpenACC to manage parallelism within a node and OpenMPI to manage parallelism among nodes. We first discuss the available strategies to maximize performance, then describe selected relevant details of the code, and finally measure the level of performance and scaling that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly higher level of performance for this application, but also compares with results measured on other processors.
    Comment: 22 pages, 8 png figures

    Roberge-Weiss endpoint and chiral symmetry restoration in Nf=2+1 QCD

    We investigate the fate of the Roberge-Weiss endpoint transition and its connection with the restoration of chiral symmetry as the chiral limit of $N_f = 2+1$ QCD is approached. We adopt a stout staggered discretization on lattices with $N_t = 4$ sites in the temporal direction; the chiral limit is approached maintaining a constant physical value of the strange-to-light mass ratio and exploring three different light quark masses, corresponding to pseudo-Goldstone pion masses $m_\pi \simeq 100$, 70 and 50 MeV around the transition. A finite-size scaling analysis provides evidence that the transition remains second order, in the 3D Ising universality class, over the whole explored mass range. The residual chiral symmetry of the staggered action also allows us to investigate the relation between the Roberge-Weiss endpoint transition and the chiral restoration transition as the chiral limit is approached: our results, including the critical scaling of the chiral condensate, are consistent with a coincidence of the two transitions in the chiral limit; however, we are not able to discern the symmetry controlling the critical behavior, because the critical indices relevant to the scaling of the chiral condensate are very close to each other for the two candidate universality classes (3D Ising or O(2)).
    Comment: 12 pages, 18 figures, 5 tables
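    A finite-size scaling analysis of this kind rests on an ansatz of the following form, sketched here for a generic susceptibility $\chi$ with the standard 3D Ising exponents (textbook reference values, not numbers extracted from the paper):

    ```latex
    \chi(t, L) \simeq L^{\gamma/\nu}\, \phi\!\left(t\, L^{1/\nu}\right),
    \qquad t = \frac{T - T_c}{T_c},
    \qquad \nu \approx 0.6301,\ \gamma \approx 1.2372,
    ```

    so that data measured on different lattice sizes $L$ collapse onto the single scaling curve $\phi$ when $\chi\, L^{-\gamma/\nu}$ is plotted against $t\, L^{1/\nu}$; the quality of the collapse is what discriminates between candidate universality classes.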

    Comparative expression pathway analysis of human and canine mammary tumors

    Background: Spontaneous tumors in dogs have been demonstrated to share many features with their human counterparts, including relevant molecular targets, histological appearance, genetics, biological behavior and response to conventional treatments. Mammary tumors in dogs therefore provide an attractive alternative to more classical mouse models, such as transgenics or xenografts, where the tumor is artificially induced. To assess the extent to which dog tumors represent clinically significant human phenotypes, we performed the first genome-wide comparative analysis of transcriptional changes occurring in mammary tumors of the two species, with particular focus on the molecular pathways involved.
    Results: We analyzed human and dog gene expression data derived from both tumor and normal mammary samples. By analyzing the expression levels of about ten thousand dog/human orthologous genes we observed a significant overlap of genes deregulated in the mammary tumor samples, as compared to their normal counterparts. Pathway analysis of the gene expression data revealed a great degree of similarity in the perturbation of many cancer-related pathways, including 'PI3K/AKT', 'KRAS', 'PTEN', 'WNT-beta catenin' and the 'MAPK cascade'. Moreover, we show that the transcriptional relationships between different gene signatures observed in human breast cancer are largely maintained in the canine model, suggesting a close interspecies similarity in the network of cancer signalling circuitries.
    Conclusion: Our data confirm and further strengthen the value of the canine mammary cancer model and open up new perspectives for the evaluation of novel cancer therapeutics and the development of prognostic and diagnostic biomarkers to be used in clinical studies.
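    The significance of an overlap between two deregulated gene sets is the kind of quantity usually scored with a hypergeometric test. The C sketch below is a generic illustration of that test (the set sizes in the test are hypothetical; the paper does not publish this code):

    ```c
    #include <math.h>

    /* log of the binomial coefficient C(n, k) via lgamma */
    static double lchoose(int n, int k) {
        return lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0);
    }

    /* Hypergeometric tail P(X >= k): the chance that two gene sets of sizes
     * a and b, drawn at random from N orthologs, share at least k genes.
     * A small p-value marks the observed overlap as unlikely under chance. */
    static double overlap_pvalue(int N, int a, int b, int k) {
        double p = 0.0;
        int hi = a < b ? a : b;
        for (int x = k; x <= hi; x++) {
            if (b - x > N - a) continue;   /* impossible configuration */
            p += exp(lchoose(a, x) + lchoose(N - a, b - x) - lchoose(N, b));
        }
        return p;
    }
    ```

    Working in log-space via lgamma avoids overflow for gene-scale counts (N in the thousands), where direct factorials would not fit in a double.
    
    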

    The CAFA challenge reports improved protein function prediction and new functional annotations for hundreds of genes through experimental screens

    Background: The Critical Assessment of Functional Annotation (CAFA) is an ongoing, global, community-driven effort to evaluate and improve the computational annotation of protein function.
    Results: Here, we report on the results of the third CAFA challenge, CAFA3, which featured an expanded analysis over the previous CAFA rounds, both in terms of the volume of data analyzed and the types of analysis performed. In a major new development, computational predictions and assessment goals drove some of the experimental assays, resulting in new functional annotations for more than 1000 genes. Specifically, we performed experimental whole-genome mutation screening in the Candida albicans and Pseudomonas aeruginosa genomes, which provided us with genome-wide experimental data for genes associated with biofilm formation and motility. We further performed targeted assays on selected genes in Drosophila melanogaster, which we suspected of being involved in long-term memory.
    Conclusion: We conclude that while predictions of the molecular function and biological process annotations have slightly improved over time, those of the cellular component have not. Term-centric prediction of experimental annotations remains equally challenging; although the performance of the top methods is significantly better than the expectations set by baseline methods in C. albicans and D. melanogaster, it leaves considerable room and need for improvement. Finally, we report that the CAFA community now involves a broad range of participants with expertise in bioinformatics, biological experimentation, biocuration, and bio-ontologies, working together to improve functional annotation, computational function prediction, and our ability to manage big data in the era of large experimental screens.
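    CAFA ranks predictors with threshold-sweeping measures such as Fmax. The C sketch below shows the idea for a single target (the official metric additionally averages precision and recall over proteins; that bookkeeping, and the score/label values in the test, are simplifications introduced here):

    ```c
    /* Fmax: sweep a confidence threshold and keep the best F-measure.
     * scores[i] is the predictor's confidence that term i applies;
     * label[i] is 1 if the term is experimentally annotated. */
    static double fmax_score(const double *scores, const int *label, int n) {
        double best = 0.0;
        for (int t = 0; t <= 100; t++) {
            double thr = t / 100.0;
            int tp = 0, fp = 0, fn = 0;
            for (int i = 0; i < n; i++) {
                int pred = scores[i] >= thr;    /* predict term at this cut */
                if (pred && label[i]) tp++;
                else if (pred) fp++;
                else if (label[i]) fn++;
            }
            if (tp > 0) {
                double pr = (double)tp / (tp + fp);   /* precision */
                double rc = (double)tp / (tp + fn);   /* recall */
                double f = 2.0 * pr * rc / (pr + rc); /* harmonic mean */
                if (f > best) best = f;
            }
        }
        return best;
    }
    ```

    Sweeping the threshold rewards well-calibrated confidence scores: a method only reaches a high Fmax if some cut separates true from false annotations.
    
    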
